Remote Sensing (RS)
Samaneh Bagheri; Mahmoud Soorghali; Hassan Emami
Abstract
Extended Abstract
1- Introduction
Monitoring vegetation change is crucial for environmental planning and management, and satellite imagery offers various change-detection methods, each with its own advantages and disadvantages. Vegetation indices derived from remote sensing (RS) systems are used to evaluate changes and to create thematic maps for monitoring diverse plant cover. Today, RS indices are widely applied in specialized research fields such as vegetation health, stress assessment, plant development rate, and plant greenness, as well as in detecting plant diseases. Hyperspectral imagery, particularly the red-edge region (690-740 nm) between the red and near-infrared bands of the electromagnetic spectrum, has been widely used to derive vegetation indices. This study monitors the at-risk forest regions of a segment of northern Iran's forests in 2020 using a combination of indices derived from RS data and a geographic information system (GIS). PRISMA hyperspectral images were used to assess the health of forests in northern Iran's Rudsar, Ramsar, and Tonekabon forests, focusing on water stress, insufficient growth, plant pests, diseases, and greenness. The forest area is divided into five risk-acceptance classes using RS indices, and the layers are combined using various GIS weighting methods to delineate the remaining high-risk forest regions.
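Vegetation indices of the kind described here are simple band-ratio computations on reflectance data. As a minimal sketch (in Python with NumPy; the band choices and reflectance values below are illustrative assumptions, not the study's actual data), two common index families look like this:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-10)  # small epsilon avoids division by zero

def ndwi(nir, swir):
    """Normalized Difference Water Index (Gao): sensitive to canopy water content."""
    nir, swir = np.asarray(nir, float), np.asarray(swir, float)
    return (nir - swir) / (nir + swir + 1e-10)

# Hypothetical reflectances for two pixels: a healthy and a stressed canopy
red = np.array([0.05, 0.15])
nir = np.array([0.50, 0.30])
print(ndvi(nir, red))  # healthy vegetation gives values closer to 1
```

With hyperspectral sensors such as PRISMA, the same formulas apply per narrow band, which is what makes red-edge indices possible.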
2- Methodology
The study used twelve combined vegetation indices covering greenness, growth, leaf pigments, and leaf surface moisture, together with four individual vegetation indices computed with various techniques. On this basis, sixteen forest risk maps, each with five classes of varying risk potential, were produced; the layers were weighted using hierarchical analysis, and a final map was generated from the obtained weights. When the averaged results of the combined and individual indices were compared with the classification map, the combined indices proved more accurate than the individual ones. Existing composite indices fall into three broad groups: plant greenness, leaf pigments, and water- or light-use efficiency of the vegetation canopy. Each of these primary characteristics has multiple indices that can be combined to provide crucial insights into forest health.
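The hierarchical analysis used to weight the layers can be sketched with the common column-normalization approximation of AHP priority weights. The pairwise comparison values below are purely illustrative, not the study's actual expert judgments:

```python
import numpy as np

def ahp_weights(pairwise):
    """Approximate AHP priority weights by averaging the normalized columns
    of a pairwise comparison matrix (the common column-sum approximation)."""
    A = np.asarray(pairwise, dtype=float)
    return (A / A.sum(axis=0)).mean(axis=1)

# Illustrative 3x3 pairwise matrix for three index groups:
# greenness vs. leaf pigments vs. water/light-use efficiency
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])
weights = ahp_weights(A)
print(weights)  # weights sum to 1; the first criterion dominates
```

The resulting weights are then multiplied into the classified raster layers before overlay in the GIS.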
3- Results and discussion
The study shows that, when appropriate indices are combined, combined indices can achieve high accuracy in the risk assessment of forest areas in the north of the country, whereas an unsuitable combination yields low-accuracy results. The combined indices showed an 11% error in two high-risk forest classes, while the individual indices showed nearly double that error, at 21%. Using composite indices therefore reduces the error in delineating forest risk regions by about 50% and enhances the accuracy of monitoring these areas. Furthermore, when the combined indices were examined separately, the combination of the VCN and VCNW indices yielded the highest accuracy. These combinations are highly effective for assessing vegetation health, plant stress, and plant water content. The combined indices from RC were less accurate than the previous combination, with the highest accuracy achieved by SIPI, NDII, NDWI, and WBI; these combinations are used in plant health and stress assessment. The accuracy of SIPI, NDII, NDWI, WBI1, PRI1, and RGRI dropped significantly when combined with the NC index. This low accuracy may stem from the limitations of the NDVI index, which mainly detects the presence or absence of vegetation rather than plant health or stress. The study presents the first results of research on plant stress in northern Iranian forests using PRISMA hyperspectral data. Hyperspectral data were chosen for their superior spatial, spectral, and radiometric resolution, which makes them well suited to studying the dynamic ecosystems of the study region. Hyperspectral RS also allows non-destructive monitoring of leaf pigments such as chlorophyll, carotenoids, and anthocyanins, which are crucial indicators of vegetation health.
Therefore, it is recommended to use a combination of indices with diverse approaches in hyperspectral images, rather than individual indices, for vegetation monitoring.
4- Conclusion
Forest health monitoring is a crucial aspect of forest management programs, and RS techniques and data can be highly beneficial in this field. The study compared the accuracy of combined and individual indices against the classification map and found the combined indices to be more precise. In the two high-risk classes of the forest area, the combined indices had an error of 11%, while the individual indices had nearly twice that error, at 21%. Composite indices thus reduce forest risk area estimation errors by about 50% and improve accuracy. It is therefore recommended to use a combination of indices with different approaches in hyperspectral images, rather than individual indices, for vegetation monitoring.
Extraction, processing, production and display of geographic data
Misagh Sepehry amin; Hassan Emami
Abstract
Extended Abstract
Introduction
A digital orthophoto is a reliable, accurate, and low-cost map product for obtaining information such as geolocation, distance, area, and changes in imaged features. It is now one of the most widely used and sophisticated digital photogrammetry products. Thanks to powerful algorithms for processing aerial, drone, ground, and satellite imagery, orthophoto map creation is substantially faster than traditional topographic map production. A conventional orthophoto is the result of photogrammetric processing that employs a Digital Terrain Model (DTM), as is common in classic aerial photogrammetry. In such orthophotos, the terrain is represented very accurately, but buildings and other tall structures appear tilted, because the DTM maps only the natural shape of the earth and excludes vegetation and all man-made objects and structures. A true orthophoto, in contrast, provides a strictly vertical view of the earth's surface, eliminating building lean and giving access to practically any location on the ground. Traditionally, measuring digital surface models has been highly complex and costly, typically accomplished through LiDAR or ground measurements. The end product of drone photogrammetry is known as an orthomosaic. An orthomosaic is comparable to a true orthophoto (since it is formed using a digital surface model), but it is usually not based on a metric camera with an accurate focal length and interior orientation, as such cameras are expensive and not readily available for UAVs. Furthermore, orthomosaics may be generated from both nadir and oblique images. Drone-based orthomosaics are derived from the digital surface model (DSM) rather than from a separate survey as in traditional aerial photogrammetry; the DSM itself is produced from the 3D point cloud, which is the initial output of data processing.
Materials & Methods
The huge success of online services such as Google Earth, Google Maps, and Bing Maps has increased demand for orthophotos, driving the development of new algorithms and sensors. Orthophoto quality is commonly understood to be determined by image resolution, camera calibration, orientation accuracy, and DTM accuracy. Because digital cameras produce high-resolution imagery, one of the most important factors in orthophoto generation is the spatial resolution of the DTM: elevated objects, such as buildings and vegetation, exhibit radial displacement in the final orthophoto. In practice, orthophotos are used as small- and medium-scale maps and in updated earth surface mapping, three-dimensional urban scene reconstruction, village surveying, land planning, precision agriculture, desertification monitoring, land use surveying, and other sectors. True orthophotos are orthophotos that have been refined to minimize tilt error and projection discrepancies. True orthophoto generation places stringent demands on the original imagery: the forward overlap and side overlap must be at least 80% and 60%, respectively. Because displacements caused by camera tilt and height differences are removed, the orthophoto, as a spatial data format with high geometric accuracy, has found growing application in recent years. With the growing relevance of geographic information systems, particularly in metropolitan areas, the use of orthophotos in conjunction with spatial data has grown. Because an orthophoto contains correct spatial and textural information about features, it can be integrated with 3D models to produce virtual reality environments in which the height and planimetric position of features can be properly measured during 3D viewing.
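The radial displacement of elevated objects mentioned above follows the classical relief-displacement relation d = r * h / H (r: radial distance from the nadir point on the image, h: object height, H: flying height above the object's base). A minimal sketch with hypothetical numbers:

```python
def relief_displacement(radial_dist_mm, object_height_m, flying_height_m):
    """Radial image displacement of an elevated point in a vertical photo:
    d = r * h / H (same units as r)."""
    return radial_dist_mm * object_height_m / flying_height_m

# A 30 m building imaged 80 mm from the nadir point at 1000 m flying height
d = relief_displacement(80.0, 30.0, 1000.0)
print(d)  # 2.4 mm of radial "lean" on the image
```

This is why a DTM-based orthophoto leaves buildings leaning while a DSM-based true orthophoto can remove the effect.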
In this research, a novel approach for generating orthophotos from Google Earth imagery for specific purposes was developed and compared, qualitatively and quantitatively, with orthophotos created from UAV images.
Results and discussion
The results show the total error of orthomosaic generation from Google Earth imagery and UAV data to be 0.124 and 0.059 m/pixel, respectively. Moreover, the visual findings reveal that the edges of low obstacles in the orthophoto generated from Google Earth images are superior to those in the orthophoto generated from drone imagery, whereas the edges of tall obstacles, particularly those casting noticeable shadows, are of poor quality. Statistical analysis of randomly selected points in non-building regions showed that the orthophoto derived from Google Earth data has a mean error of 1.10 m and a root mean square error (RMSE) of 1.34 m. In addition, the orthophotos generated from UAV data and from Google Earth showed a 95% correlation and a 91% coefficient of determination. In building regions, by contrast, the mean height error and RMSE of the orthophoto generated from Google Earth data relative to the UAV data were around 9 m and 5 m, respectively, and the statistical metrics in these locations also revealed a lower correlation of 80% and a coefficient of determination of 65%.
Conclusions
In this research, a novel approach for generating orthophotos from Google Earth imagery for specific purposes was developed and compared, qualitatively and quantitatively, with orthophotos created from UAV images. As the height of the obstacles and the length of their shadows increase, so does the error in the height component of the orthophoto derived from Google Earth imagery. It is therefore advised that Google Earth images be used to create orthophotos for special applications and for flat or hilly regions.
Additionally, Google Earth data offers the following advantages: free of charge; the utilization of historical imagery to generate orthophotos; and nearly four times less processing time and volume.
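The statistical measures reported above (mean error, RMSE, and coefficient of determination) can be computed from check-point pairs as sketched below; the height values here are hypothetical, not the study's measurements:

```python
import numpy as np

def rmse(pred, ref):
    """Root mean square error between predicted and reference values."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def r_squared(pred, ref):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    ss_res = np.sum((ref - pred) ** 2)
    ss_tot = np.sum((ref - ref.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical heights (m) at five check points: UAV reference vs. Google Earth product
ref = np.array([12.0, 8.5, 15.2, 6.1, 10.0])
pred = np.array([11.4, 9.2, 14.0, 6.9, 10.8])
print(rmse(pred, ref), r_squared(pred, ref))
```

The correlation figure quoted in the abstract would come from the Pearson coefficient of the same point pairs (e.g. `np.corrcoef`).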
Hassan Emami; Seyyed Ghasem Rostami
Abstract
Extended Abstract
Introduction
Unmanned Aerial System (UAS) photogrammetry now provides a low-cost, fast, and effective approach to real-time acquisition of high-resolution digital geospatial information, as well as automatic 3D modeling of objects, for a variety of applications including topographic mapping, 3D city modeling, orthophoto generation, and cultural heritage preservation. UASs are known by a variety of names and acronyms, including aerial robots or simply drones, with UAV and drone being the most commonly used terms. Because of the versatility of their on-board Global Navigation Satellite System (GNSS) receivers and inertial measurement unit (IMU) sensors, UASs open up new options for photogrammetric projects. This research investigates the ability of four state-of-the-art, professional drone-based software packages, Agisoft Metashape, Inpho UASMaster, PhotoModeler UAS, and Pix4D Mapper, to generate a high-density point cloud, a Digital Surface Model (DSM), and a true orthoimage over barren, residential, green space, and uniformly textured areas in urban and exurban settings.
Methodology
The major steps in this study are image acquisition; point cloud, DSM, and DEM generation; and accuracy assessment. Data planning and acquisition are the initial steps of any project. The overlapping images were obtained from four data sets with distinct surface feature attributes and camera types under different shooting conditions. The data sets consist of images taken with FC6310 (8.8 mm), NEX-5R (5.2 mm), and Canon IXUS 220HS (4.3 mm) cameras at flight heights ranging from 52 to 246 m, with correspondingly varied spatial resolutions. The four data sets, two from Iran and two from other countries, were chosen to cover barren, residential, green space, and uniformly textured areas. GPS coordinates for these photos were also recorded to geo-reference the images and improve model accuracy. Camera calibration must likewise be addressed, with its parameters determined at the start of the project; the images are calibrated first in order to estimate camera pose. The next stage is to compare survey measurements with model measurements to assess the overall accuracy of the 3D model. The accuracy of the point cloud, DSM, and 3D textured model is then evaluated, covering orientation accuracy and measurement uncertainty in the various modeling procedures. Finally, the products of the software packages were evaluated statistically and qualitatively.
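The link between flight height, focal length, and spatial resolution in the data sets above is the ground sampling distance (GSD). A minimal sketch, assuming a sensor pixel pitch of about 2.4 µm for the FC6310-class camera (an assumed value, not taken from the study):

```python
def ground_sampling_distance(pixel_size_um, flying_height_m, focal_length_mm):
    """Ground sampling distance (m/pixel) = pixel size * flying height / focal length."""
    return (pixel_size_um * 1e-6) * flying_height_m / (focal_length_mm * 1e-3)

# FC6310-style camera with an 8.8 mm lens flown at 100 m;
# the 2.4 um pixel pitch is an illustrative assumption
gsd = ground_sampling_distance(2.4, 100.0, 8.8)
print(round(gsd * 100, 2))  # GSD in cm/pixel
```

Doubling the flight height doubles the GSD, which is why the 52-246 m flights yield a range of spatial resolutions.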
Results and discussion
The outcomes of this study demonstrate the ability of commercial photogrammetric software packages to perform automatic 3D reconstruction of numerous features across urban and exurban regions from high-quality aerial imagery. The assessment employs a variety of visual and geometric measurements to judge the quality of the produced point clouds and the performance of the four software packages. According to the visual quality findings, AgisMesh performs better in 3D modeling of all surface types in all locations, but poorly in reconstructing building edges in urban regions. Pix4D, on the other hand, performs poorly in areas with uniform texture but excels at recognizing height changes and reconstructing building boundaries. The other packages fall somewhere in between in visual terms. In the quantitative tests, the packages were evaluated first with check points and then with randomly selected points in three distinct classes of urban and exurban regions. The check point findings revealed root mean square errors (RMSE) of 2.82, 2.63, 5.84, and 3.03 cm for AgisMesh, UASmas, Pix4D, and PhUAS, respectively. Furthermore, the quantitative findings from randomly selected points showed that UASmas achieved accuracies of 1.83, 1.20, and 2.74 cm in the residential, barren, and green space zones, respectively; PhUAS achieved 6.90, 2.96, and 7.24 cm, and Pix4D achieved 4.72, 3.46, and 3.59 cm in the same classes. Table 1 displays the assessment findings based on the RMSE criterion.
Conclusions
The findings of this study indicate the capacity of specialist drone-based photogrammetric software packages to automatically reconstruct 3D features from high-quality aerial images over barren, residential, green space, and uniformly textured environments. All conditions and parameters were kept the same across the software packages, and because statistical parameters, point counts, and other characteristics of the various products were similar, only their discrepancies and differences were discussed in depth. Various visual and geometric parameters were used to analyze the quality of the generated 3D point clouds, DSMs, and true orthophotos. In the general and visual assessment, AgisMesh offers a simple, easy user interface, and its powerful processing methods make it possible to define and process data from any camera, even unknown models, without requiring image coordinates. In contrast, UASmas has a highly complex user interface: the user must be familiar with all the concepts of photogrammetry as well as the camera parameters file, which is not easily configured. Pix4D allows limited manual adjustment of the processing results; as a result, faulty outputs are avoided in regions with uniform texture, while the points produced in other areas are of poor quality. Compared with the other three applications, PhUAS performed poorly both visually and geometrically, and the user must enter many parameters and thresholds in the processing stages; the user must therefore be sufficiently informed about the details of photogrammetric and machine vision algorithms to understand that the quality of the software output depends heavily on these settings. Furthermore, the check point findings revealed RMSEs of 2.82, 2.63, 5.84, and 3.03 cm for AgisMesh, UASmas, Pix4D, and PhUAS, respectively.
Furthermore, the quantitative findings from randomly selected points showed average accuracies of 3.51 cm for UASmas, 10.45 cm for PhUAS, and 6.87 cm for Pix4D across the three residential, barren, and green space regions. Taking into account all of the benefits and the evaluations of visual and geometric accuracy, the performance and accuracy of AgisMesh, UASmas, Pix4D, and PhUAS may be ranked from first to fourth, respectively.